A quasi-Newton method using a nonquadratic model
Authors
Abstract
Similar resources
A quasi-Newton proximal splitting method
A new result in convex analysis on the calculation of proximity operators in certain scaled norms is derived. We describe efficient implementations of the proximity calculation for a useful class of functions; the implementations exploit the piecewise-linear nature of the dual problem. The second part of the paper applies the previous result to the acceleration of convex minimization problems, and...
A parameterized Newton method and a quasi-Newton method for nonsmooth equations
This paper presents a parameterized Newton method using generalized Jacobians and a Broyden-like method for solving nonsmooth equations. The former ensures that the method is well-defined even when the generalized Jacobian is singular. The latter is constructed by using an approximation function which can be formed for nonsmooth equations arising from partial differential equations and nonlinear ...
Quasi-Newton Trust-Region Method
The classical trust-region method for unconstrained minimization can be augmented with a line search that finds a point that satisfies the Wolfe conditions. One can use this new method to define an algorithm that simultaneously satisfies the quasi-Newton condition at each iteration and maintains a positive-definite approximation to the Hessian of the objective function. This new algorithm has s...
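The quasi-Newton condition mentioned in this abstract is the secant equation satisfied by the BFGS update. As a minimal illustration (plain NumPy, not the paper's trust-region algorithm), the standard BFGS update of the inverse-Hessian approximation satisfies H⁺y = s by construction and preserves positive definiteness whenever sᵀy > 0:

```python
import numpy as np

def bfgs_update(H, s, y):
    """BFGS update of the inverse-Hessian approximation H.

    Satisfies the quasi-Newton (secant) condition H_new @ y == s and
    preserves positive definiteness of H when the curvature condition
    s @ y > 0 holds (guaranteed by a Wolfe line search).
    """
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

# One update on a 2-D problem:
s = np.array([1.0, 0.5])   # step  x_{k+1} - x_k
y = np.array([2.0, 1.5])   # gradient change  g_{k+1} - g_k
H_new = bfgs_update(np.eye(2), s, y)
print(np.allclose(H_new @ y, s))  # True: secant condition holds
```

The curvature condition sᵀy > 0 is exactly what the Wolfe line search mentioned above guarantees, which is why the combined algorithm can maintain a positive-definite Hessian approximation at every iteration.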
A Stochastic Quasi-Newton Method for Online Convex Optimization
We develop stochastic variants of the well-known BFGS quasi-Newton optimization method, in both full and limited-memory (L-BFGS) forms, for online optimization of convex functions. The resulting algorithm performs comparably to a well-tuned natural gradient descent but is scalable to very high-dimensional problems. On standard benchmarks in natural language processing, it asymptotically outperfor...
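The limited-memory (L-BFGS) form referenced here is built on the standard two-loop recursion, which computes the quasi-Newton direction from a short history of (s, y) pairs without ever forming a matrix. The sketch below shows the classical deterministic recursion that the stochastic variants modify, not the authors' algorithm itself:

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Two-loop recursion: returns -H @ g, where H is the L-BFGS
    inverse-Hessian approximation built from the stored (s, y) pairs."""
    q = g.astype(float).copy()
    stack = []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q -= a * y
        stack.append((rho, a, s, y))
    if s_list:  # initial scaling gamma = s^T y / y^T y (most recent pair)
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for rho, a, s, y in reversed(stack):
        b = rho * (y @ q)
        q += (a - b) * s
    return -q

# With an empty memory the direction is plain steepest descent:
print(lbfgs_direction(np.array([3.0, -1.0]), [], []))  # [-3.  1.]
```

Because only the vector pairs are stored, memory and per-step cost are linear in the dimension, which is what makes the method "scalable to very high-dimensional problems" as the abstract notes.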
A Stochastic Quasi-Newton Method for Non-Rigid Image Registration
Image registration is often very slow because of the high dimensionality of the images and complexity of the algorithms. Adaptive stochastic gradient descent (ASGD) outperforms deterministic gradient descent and even quasi-Newton in terms of speed. This method, however, only exploits first-order information of the cost function. In this paper, we explore a stochastic quasi-Newton method (s-LBFG...
Journal
Journal title: Journal of Computational and Applied Mathematics
Year: 1994
ISSN: 0377-0427
DOI: 10.1016/0377-0427(92)00115-p